

Fast deep reinforcement learning using online adjustments from the past

Neural Information Processing Systems

We propose Ephemeral Value Adjustments (EVA): a means of allowing deep reinforcement learning agents to rapidly adapt to experience in their replay buffer. EVA shifts the value predicted by a neural network with an estimate of the value function found by prioritised sweeping over experience tuples from the replay buffer near the current state. EVA combines a number of recent ideas on integrating episodic memory-like structures into reinforcement learning agents: slot-based storage, content-based retrieval, and memory-based planning. We show that EVA performs well on a demonstration task and on Atari games.
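The mechanism in the abstract can be sketched in a few lines: retrieve replay-buffer slots near the current state by content-based (nearest-neighbour) lookup, form a non-parametric value estimate from them, and shift the network's prediction towards it. This is a minimal illustration only, not the authors' implementation; the function names, the mean-over-neighbours estimate (the paper uses prioritised sweeping), and the mixing coefficient `lam` are all assumptions.

```python
# Hypothetical sketch of EVA-style value shifting. Not the paper's code:
# the neighbour aggregation and the coefficient lam are assumed here.
import numpy as np

def retrieve_neighbours(query_emb, memory_embs, k):
    """Content-based retrieval: indices of the k nearest slots by L2 distance."""
    dists = np.linalg.norm(memory_embs - query_emb, axis=1)
    return np.argsort(dists)[:k]

def eva_value(q_theta, q_np, lam=0.4):
    """Shift the parametric estimate towards the memory-derived estimate."""
    return lam * q_theta + (1.0 - lam) * q_np

# Toy slot-based memory: 5 slots with 3-d state embeddings and stored values.
memory_embs = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.],
                        [5., 5., 5.], [6., 6., 6.]])
memory_values = np.array([1.0, 2.0, 3.0, 10.0, 11.0])

query = np.array([0.1, 0.1, 0.0])
idx = retrieve_neighbours(query, memory_embs, k=3)   # three closest slots
q_np = memory_values[idx].mean()                     # (1 + 2 + 3) / 3 = 2.0
q = eva_value(q_theta=4.0, q_np=q_np, lam=0.4)       # 0.4*4 + 0.6*2 = 2.8
```

The "ephemeral" aspect is that the non-parametric term is recomputed from the current contents of the replay buffer rather than being distilled into the network's weights.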


Reviews: Fast deep reinforcement learning using online adjustments from the past

Neural Information Processing Systems

Summary: This paper proposes a method that helps an RL agent rapidly adapt to experience in its replay buffer. The method combines a slow, general component (the parametric value network) with a fast, local one (value estimates computed from the replay buffer). An interesting part of the proposed approach is that it only slightly changes the replay buffer, by adding trajectory information, yet obtains a good boost in performance. In addition, an extensive set of experiments has been conducted to verify the paper's claims. Comments and Questions: This paper is, in general, well written (especially the related work section, which actually discusses the relations to and differences from previous work), except for the following: -- Sections 3.1 and 3.2 do not flow smoothly.


Fast deep reinforcement learning using online adjustments from the past

Hansen, Steven, Pritzel, Alexander, Sprechmann, Pablo, Barreto, Andre, Blundell, Charles

Neural Information Processing Systems
